
    An Approach to Model Checking of Multi-agent Data Analysis

    The paper presents an approach to the verification of a multi-agent data analysis algorithm. We base verification on a correct simulation of the multi-agent system by a finite integer model. For verification we use the model checking tool SPIN: the agents' protocols are written in the Promela language, and properties of the multi-agent data analysis system are expressed in the logic LTL. We report on several experiments with SPIN on this model.
    Comment: In Proceedings MOD* 2014, arXiv:1411.345
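    The core verification loop can be hard to picture from the abstract alone. As a rough illustration only (not the paper's actual model, and far simpler than SPIN, which handles full LTL), a minimal explicit-state check of a safety property over a finite integer model might look like this; the toy counter system and all names are invented for the sketch:

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Breadth-first exploration of a finite state space, checking that
    an invariant holds in every reachable state (a safety property only;
    a model checker like SPIN additionally verifies full LTL)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return False, state            # counterexample state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

# Toy finite integer model: a shared counter that agents increment up to 3.
ok, cex = check_invariant(0, lambda n: [n + 1] if n < 3 else [], lambda n: n <= 3)
print(ok, cex)  # -> True None
```

    The same loop reports a counterexample state as soon as the invariant is violated, which is the shape of the feedback a model checker gives.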

    Heuristic Approaches for Generating Local Process Models through Log Projections

    Local Process Model (LPM) discovery is focused on mining a set of process models where each model describes the behavior represented in the event log only partially, i.e., only subsets of possible events are taken into account to create so-called local process models. Often such smaller models provide valuable insights into the behavior of the process, especially when no adequate and comprehensible single overall process model exists that is able to describe the traces of the process from start to end. The practical application of LPM discovery is, however, hindered by computational issues in the case of logs with many activities (problems may already occur when there are more than 17 unique activities). In this paper, we explore three heuristics to discover subsets of activities that lead to useful log projections, with the goal of speeding up LPM discovery considerably while still finding high-quality LPMs. We found that a Markov clustering approach to creating projection sets results in the largest improvement of execution time, with the discovered LPMs still being better than those obtained with randomly generated activity sets of the same size. Another heuristic, based on log entropy, yields a more moderate speedup but enables the discovery of higher-quality LPMs. The third heuristic, based on relative information gain, shows unstable performance: for some data sets the speedup and LPM quality are higher than with the log-entropy-based method, while for other data sets there is no speedup at all.
    Comment: paper accepted and to appear in the proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM), special session on Process Mining, part of the Symposium Series on Computational Intelligence (SSCI)
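    To make the log-projection idea concrete, here is a small sketch, with the caveat that the function names and the exact scoring formula are illustrative rather than the paper's definitions: an activity subset is scored by the Shannon entropy of the trace variants that remain after projecting the log onto that subset.

```python
from collections import Counter
from math import log2

def project(trace, activities):
    """Drop every event whose activity is outside the projection set."""
    return tuple(a for a in trace if a in activities)

def projection_entropy(log, activities):
    """Shannon entropy of the projected trace-variant distribution; an
    entropy-based heuristic can use such scores to rank candidate
    projection sets before running LPM discovery on each of them."""
    variants = Counter(project(t, activities) for t in log)
    total = sum(variants.values())
    return -sum(c / total * log2(c / total) for c in variants.values())

log = [("a", "b", "c"), ("a", "c", "b"), ("a", "b", "c"), ("b", "c", "a")]
print(projection_entropy(log, {"a", "b"}))
```

    Projections that collapse the log to a single variant score zero, while projections that preserve behavioural variety score higher, giving a cheap ranking criterion.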

    STANDARD OF PROOF AND RUSSIAN PROCEDURE LAW: UNKNOWN OR WELL-KNOWN?

    Though Russian legal science discusses the need to introduce standards of proof into the procedural branches of law, the different branches of Russian procedural law (civil procedure, sports disputes procedure, criminal procedure) do not directly refer to the standards of proof they use. However, this does not mean that Russian procedural law ignores the doctrine of the standard of proof. The standards actually applied are constituted either by (1) an indirect normative indication of the standard or by (2) exemplary decisions of higher courts and sports arbitration bodies. It is necessary to move from this indirect, de facto use toward standards normatively fixed and defined at the level of the relevant legal acts. There is, however, a formal obstacle. In all branches of Russian procedural law, the judge (arbitrator) has the right to evaluate evidence freely, according to his inner conviction. This postulate cuts both ways: proof becomes in many respects a subjective procedure, with a result that is often far from obvious for the parties to the dispute. Nor is there uniformity among procedural branches that are identical in their legal nature. Civil procedure and sports disputes procedure are both private-law branches, yet in practice the standards of proof used in them differ. Criminal procedure, as a public branch of legal procedure, uses the strictest standard, beyond reasonable doubt, in the trial itself, but resorts to a different standard for resolving issues at the preceding stage of the process. In this article the authors provide a brief overview of the standards of proof in Russian procedural law (civil procedure, sports disputes procedure, criminal procedure) and draw conclusions concerning the current legal regulation of the issue and its possible evolution. The legal establishment of the standard of proof in civil procedure law, sports law, and criminal procedure law calls for comparative study.
Each of the standards of proof has merits, but their number should not be multiplied in the absence of doctrinal justification. That said, differentiation of standards for particular stages of the dispute resolution procedure, or for types of disputes within civil or sports procedure, is not only permissible but seems inevitable. A similar conclusion can be drawn with respect to differentiating standards of proof across the stages of criminal procedure. The formulation of the content of the standards of proof in civil procedure, sports disputes procedure, and criminal procedure has not been completed to date. Several actions seem necessary. First, to clarify the list of standards of proof actually used. Second, to identify the current goals and values of each of the three procedural branches of law (including the distribution of the burden of proof between the parties). Finally, to formulate the standards of proof for each of the named procedural branches: civil, sports, and criminal.

    Log-based Evaluation of Label Splits for Process Models

    Process mining techniques aim to extract insights into processes from event logs. One of the challenges in process mining is identifying interesting and meaningful event labels that contribute to a better understanding of the process. Our application area is mining data from smart homes for the elderly, where the ultimate goal is to signal deviations from usual behavior and provide timely recommendations in order to extend the period of independent living. Extracting individual process models showing user behavior is an important instrument in achieving this goal. However, the interpretation of sensor data at an appropriate abstraction level is not straightforward. For example, a motion sensor in a bedroom can be triggered by tossing and turning in bed or by getting up. We try to derive the actual activity depending on the context (time, previous events, etc.). In this paper we introduce the notion of label refinements, which link more abstract event descriptions with their more refined counterparts. We present a statistical evaluation method to determine the usefulness of a label refinement for a given event log from a process perspective. Based on data from smart homes, we show how our statistical evaluation method for label refinements can be used in practice. Our method was able to select, out of a set of candidate label refinements, two refinements that both had a positive effect on model precision.
    Comment: Paper accepted at the 20th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems, to appear in Procedia Computer Science
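    As a toy illustration of a label refinement — the rule, labels, and threshold below are hypothetical, not taken from the paper's smart-home data — an abstract sensor label can be split into finer labels using temporal context; a statistical evaluation would then compare model quality before and after the split:

```python
def refine_label(event, hour):
    """Refine the abstract label 'bedroom_motion' into two finer labels
    using time-of-day context (hypothetical rule: motion before 06:00 is
    tossing and turning in bed, later motion is getting up)."""
    if event == "bedroom_motion":
        return "toss_turn" if hour < 6 else "get_up"
    return event

# A trace of (abstract label, hour of day) pairs and its refined version.
trace = [("bedroom_motion", 3), ("bedroom_motion", 7), ("kitchen_motion", 8)]
refined = [refine_label(e, h) for e, h in trace]
print(refined)  # -> ['toss_turn', 'get_up', 'kitchen_motion']
```

    The refined log distinguishes behaviours that the abstract log conflates, which is exactly what a precision-improving refinement should achieve.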

    Guided Interaction Exploration in Artifact-centric Process Models

    Artifact-centric process models aim to describe complex processes as a collection of interacting artifacts. Recent developments in process mining allow for the discovery of such models. However, the focus is often on the representation of the individual artifacts rather than on their interactions. Based on event data, we can automatically discover composite state machines representing artifact-centric processes. Moreover, we provide ways of visualizing and quantifying interactions among different artifacts. For example, we are able to highlight strongly correlated behaviours in different artifacts. The approach has been fully implemented as a ProM plug-in; the CSM Miner provides an interactive artifact-centric process discovery tool focussing on interactions. The approach has been evaluated using real-life data sets, including the personal loan and overdraft process of a Dutch financial institution.
    Comment: 10 pages, 4 figures, to be published in proceedings of the 19th IEEE Conference on Business Informatics, CBI 201
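    One simple way to quantify interaction between two artifacts is a lift score over co-observed states; this is purely illustrative, and the measures used by the CSM Miner itself may differ:

```python
def lift(log, a, b):
    """Lift of state a (artifact A) and state b (artifact B): how much
    more often the two states are observed together than independence
    would predict. Values well above 1 suggest correlated behaviour.
    `log` is a list of (state_of_A, state_of_B) co-observations."""
    n = len(log)
    p_a = sum(1 for x, _ in log if x == a) / n
    p_b = sum(1 for _, y in log if y == b) / n
    p_ab = sum(1 for x, y in log if x == a and y == b) / n
    return p_ab / (p_a * p_b)

# Toy log: loan-application state vs. overdraft state observed per event.
log = [("open", "requested"), ("open", "approved"),
       ("closed", "approved"), ("closed", "approved")]
print(lift(log, "open", "requested"))  # -> 2.0
```

    Scores like these can drive the highlighting of strongly correlated state pairs in a visualization.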

    Abstraction and flow analysis for model checking open asynchronous systems

    Formal methods, especially model checking, are an indispensable part of the software engineering process. With large software systems currently beyond the range of fully automatic verification, however, a combination of decomposition and abstraction techniques is needed. To model check components of a system, a standard approach is to close the component with an abstraction of its environment. To make this useful in practice, the closing of the component should be automatic, for both data and control abstraction. Specifically for model checking asynchronous open systems, external input queues should be removed, as they are a potential source of a combinatorial state explosion. In this paper, we close a component synchronously by embedding the external environment directly into the system to avoid the external queues, while for the data we use a two-valued abstraction, namely whether a value is influenced from the outside or not. This gives a more precise analysis than the one investigated in [7]. To further combat the state explosion problem, we combine this data abstraction with a static analysis that removes superfluous code fragments. The static analysis we use is reminiscent of the one presented in [7], but we use a combination of a may- and a must-analysis instead of a may-analysis alone.
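    The two-valued data abstraction can be pictured as follows; this is a minimal sketch under invented names, not the paper's implementation. Every concrete value is abstracted to whether it is influenced from the outside, and the transfer function for any operation propagates that influence:

```python
# The two abstract values of the data abstraction: a value is either
# influenced by the environment ("external") or fully determined by
# the component itself ("internal").
EXTERNAL, INTERNAL = "external", "internal"

def abstract_op(a, b):
    """Abstract transfer function for any binary operation: the result
    is influenced from the outside as soon as either operand is."""
    return EXTERNAL if EXTERNAL in (a, b) else INTERNAL

# A value computed only from component-internal data stays internal;
# mixing in environment input makes the result external.
print(abstract_op(INTERNAL, INTERNAL))  # -> internal
print(abstract_op(INTERNAL, EXTERNAL))  # -> external
```

    Because the abstract domain has only two values, each abstracted variable contributes a factor of two rather than its full concrete range to the state space.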

    Using fairness to make abstractions work

    Abstractions often introduce infinite traces that have no corresponding traces at the concrete level, which can lead to the failure of the verification. Refinement does not always help to eliminate those traces. In this paper, we consider a timer abstraction that introduces a cyclic behaviour on abstract timers, and we show how one can exclude such cycles by imposing a strong fairness constraint on the abstract model. Exploiting the fact that the loop on the abstract timer is a self-loop, we reduce the strong fairness constraint to a weak fairness constraint and embed it into the verification algorithm. We implemented the algorithm in the DTSpin model checker and demonstrated its efficiency on case studies. The same approach can be used for other data abstractions that introduce self-loops.
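    The effect of the fairness constraint can be sketched as follows; the encoding and names are hypothetical, not DTSpin's implementation. A lasso-shaped counterexample whose cycle never leaves the abstract timer's self-loop state is exactly the kind of spurious infinite trace the abstraction introduces, and fairness rules it out:

```python
def spurious_under_fairness(cycle_states, timer_state):
    """A lasso-shaped counterexample whose cycle consists only of the
    abstract timer's self-loop state corresponds to an infinite trace
    introduced by the abstraction; a (weak) fairness constraint on
    eventually leaving the self-loop discards it as spurious."""
    return all(s == timer_state for s in cycle_states)

# Cycle stuck in the abstract timer state: spurious, can be discarded.
print(spurious_under_fairness(["t_abs", "t_abs"], "t_abs"))  # -> True
# Cycle that also visits a real system state: a genuine counterexample.
print(spurious_under_fairness(["t_abs", "run"], "t_abs"))    # -> False
```

    Filtering counterexamples this way is what lets the verification algorithm treat the strong fairness requirement as the cheaper weak fairness check.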